
Collaborating Authors

Max Tegmark


'Godfather of AI' reveals the startling odds that artificial intelligence will take over humanity

Daily Mail - Science & tech

Scientist and physicist Geoffrey Hinton believes there could be a one-in-five chance that humanity will eventually be taken over by artificial intelligence. Hinton, a Nobel laureate in physics who has been dubbed the 'godfather of AI', made the startling prediction in an April 1 interview with CBS News that aired on Saturday morning. 'I'm in the unfortunate position of happening to agree with Elon Musk on this, which is that there's a 10 to 20 percent chance that these things will take over, but that's just a wild guess,' Hinton said. Besides his cost-cutting responsibilities in the federal government, Musk is the chief executive of xAI, the company that made the AI chatbot Grok. Musk has said AI will become smarter than the entire human race by 2029.


Towards Understanding Distilled Reasoning Models: A Representational Approach

Baek, David D., Tegmark, Max

arXiv.org Artificial Intelligence

In this paper, we investigate how model distillation impacts the development of reasoning features in large language models (LLMs). To explore this, we train a crosscoder on Qwen-series models and their fine-tuned variants. Our results suggest that the crosscoder learns features corresponding to various types of reasoning, including self-reflection and computation verification. Moreover, we observe that distilled models contain unique reasoning feature directions, which could be used to steer the model into over-thinking or incisive-thinking mode. In particular, we perform analysis on four specific reasoning categories: (a) self-reflection, (b) deductive reasoning, (c) alternative reasoning, and (d) contrastive reasoning. Finally, we examine the changes in feature geometry resulting from the distillation process and find indications that larger distilled models may develop more structured representations, which correlate with enhanced distillation performance. By providing insights into how distillation modifies the model, our study contributes to enhancing the transparency and reliability of AI systems.
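The steering idea in the abstract — pushing a model toward "over-thinking" or "incisive-thinking" — can be illustrated with a toy activation-steering sketch. The direction vector would come from the trained crosscoder in the paper; here a random unit vector stands in for a learned reasoning feature, and the hidden state is a plain numpy array rather than a real LLM activation (both are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
d_model = 16

# Stand-in for a crosscoder-derived reasoning feature (assumption: in the
# paper this direction is learned, not random).
feature_dir = rng.standard_normal(d_model)
feature_dir /= np.linalg.norm(feature_dir)  # unit-norm steering direction

def steer(hidden_state: np.ndarray, direction: np.ndarray, alpha: float) -> np.ndarray:
    """Add alpha * direction to a residual-stream activation.

    alpha > 0 pushes the activation toward the feature ("over-thinking"),
    alpha < 0 pushes it away ("incisive thinking").
    """
    return hidden_state + alpha * direction

h = rng.standard_normal(d_model)           # stand-in for a model activation
h_over = steer(h, feature_dir, alpha=4.0)
h_incisive = steer(h, feature_dir, alpha=-4.0)

# The steered activation differs from the original only along feature_dir.
print(np.allclose(h_over - h, 4.0 * feature_dir))
```

In practice such a vector would be added to the residual stream at a chosen layer during generation; the sketch only shows the arithmetic of the intervention.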


Generalization from Starvation: Hints of Universality in LLM Knowledge Graph Learning

Baek, David D., Li, Yuxiao, Tegmark, Max

arXiv.org Artificial Intelligence

We show that these attractor representations optimize generalization to unseen examples by exploiting properties of knowledge graph relations. We find experimental support for such universality by showing that LLMs and simpler neural networks can be stitched, i.e., the first part of one model can be connected to the last part of another, mediated only by an affine or almost-affine transformation. We hypothesize that this dynamic toward simplicity and generalization is driven by "intelligence from starvation": overfitting is minimized by pressure to minimize the use of resources that are either scarce or competed for against other tasks. Large Language Models (LLMs), despite being trained primarily for next-token prediction, have shown impressive reasoning capabilities (Bubeck et al., 2023; Anthropic, 2024; Team et al., 2023). However, despite recent progress reviewed below, it is not well understood what knowledge LLMs represent internally and how they represent it. Improving such understanding could enable valuable progress relevant to transparency, interpretability, fairness, and robustness, for example discovering and correcting inaccuracies to improve model reliability.
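Model stitching as described in the abstract can be sketched in miniature: run the first part of model A, map its representation through a single affine transformation fit by least squares, and feed the result into the last part of model B. The "models" below are toy linear maps (an assumption; the paper's point is that an affine bridge works even between real LLMs and simpler networks), and the dimensions are chosen so that A's representation is lossless, which makes the stitch exact here:

```python
import numpy as np

rng = np.random.default_rng(1)
d_in, d_a, d_b, n = 8, 8, 6, 200

X = rng.standard_normal((n, d_in))
enc_a = rng.standard_normal((d_in, d_a))   # first part of "model A"
enc_b = rng.standard_normal((d_in, d_b))   # first part of "model B"
dec_b = rng.standard_normal((d_b, 3))      # last part of "model B"

Ha = X @ enc_a                             # A's intermediate representation
Hb = X @ enc_b                             # B's intermediate representation

# Fit the affine map Ha -> Hb: append a bias column, solve least squares.
Ha1 = np.hstack([Ha, np.ones((n, 1))])
W, *_ = np.linalg.lstsq(Ha1, Hb, rcond=None)

stitched = (Ha1 @ W) @ dec_b               # A's front end driving B's back end
native = Hb @ dec_b
print(np.allclose(stitched, native, atol=1e-6))
```

Because enc_a is square and (almost surely) invertible, an exact affine bridge exists in this toy; with real networks the fit is only approximate, and the quality of that approximation is the evidence for universality.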


The pause AI movement is remarkable, but won't work

#artificialintelligence

The open letter calling for an immediate six-month pause in the AI development arms race, signed by more than 1,600 tech luminaries, researchers, and responsible technology advocates under the umbrella of the Future of Life Institute, is stunning on its face. Self-reflection and caution have never been defining qualities of technology sector leaders. Outside of nuclear technology, it's hard to identify another time when so many have publicly rallied to slow the pace of technology development, much less call for government regulation and intervention. "Advanced AI could represent a profound change in the history of life on Earth and should be planned for and managed with commensurate care and resources," the letter states. "Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control. Therefore, we call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than (OpenAI's) GPT-4."


Life 3.0 (Max Tegmark): Summary

#artificialintelligence

Throughout the time that life has been present on Earth, it has evolved through two stages, writes Tegmark, and it will soon move to a third. The stages are shown below.

Life 1.0: knowledge is gained through evolution, and physical properties also evolve (e.g. bacteria).
Life 2.0: physical properties still evolve, but knowledge can be instantly gained within a lifetime (e.g. humans learning skills).
Life 3.0: both knowledge and physical properties can be changed without the need to evolve.


Autonomous Weapons Are Here, but the World Isn't Ready for Them

#artificialintelligence

This may be remembered as the year when the world learned that lethal autonomous weapons had moved from a futuristic worry to a battlefield reality. It's also the year when policymakers failed to agree on what to do about it. On Friday, 120 countries participating in the United Nations' Convention on Certain Conventional Weapons could not agree on whether to limit the development or use of lethal autonomous weapons. Instead, they pledged to continue and "intensify" discussions. "It's very disappointing, and a real missed opportunity," says Neil Davison, senior scientific and policy adviser at the International Committee of the Red Cross, a humanitarian organization based in Geneva.


The big idea: Should we worry about artificial intelligence?

The Guardian

Ever since Garry Kasparov lost his second chess match against IBM's Deep Blue in 1997, the writing has been on the wall for humanity. Or so some like to think. Advances in artificial intelligence will lead – by some estimates, in only a few decades – to the development of superintelligent, sentient machines. Movies from The Terminator to The Matrix have portrayed this prospect as rather undesirable. But is this anything more than yet another sci-fi "Project Fear"?


We have to synthetically evolve or we're doomed.

#artificialintelligence

Max Tegmark is a Swedish-American physicist, cosmologist, and machine learning researcher at MIT. He thinks that AI will redefine what it means to be human due to the scale of the changes it will bring about. During the past 13.8 billion years, our universe has transformed from dead and boring to complex and interesting, and it has the opportunity to get dramatically more interesting in the future if we don't screw up. About four billion years ago, life first appeared here on Earth, but it was pretty dumb stuff like bacteria that couldn't really learn anything in their lifetimes. Max calls them Life 1.0.


AI Futures

Communications of the ACM

"AlphaZero crushes chess!" scream the headlines as the AlphaZero algorithm developed by Google's DeepMind took just four hours of playing against itself (with no human help) to defeat the reigning World Computer Champion Stockfish by 28 wins to 0 in a 100-game match. Only four hours to recreate the chess knowledge of one and a half millennia of human creativity! This followed the announcement just weeks earlier that their program AlphaGo Zero had, starting from scratch, with no human inputs at all, comprehensively beaten the previous version AlphaGo, which in turn had spectacularly beaten one of the world's top Go players, Lee Sedol, 4-1 in a match in Seoul, Korea, in March 2016. Interest in AI has reached fever pitch in the popular imagination – its opportunities and its threats. The time is ripe for books on AI and what it holds for our future, such as Life 3.0: Being Human in the Age of Artificial Intelligence by Max Tegmark, Android Dreams by Toby Walsh, and Artificial Intelligence by Melanie Mitchell.


The Best Resources on Artificial Intelligence and Machine Learning

#artificialintelligence

Half of this crazy year is behind us and summer is here. Over the years, we machine learning engineers at Ximilar have gathered a lot of interesting ML/AI material to draw from. I have chosen the best of it, from podcasts to online courses, and recommend listening to, reading, and checking it out. Some of it is introductory, some more advanced. However, all of it is high quality, made by the best people in the field, and worth checking.